W14. Unrestricted Computation
1. Theory
1.1 Grammar Rules and the Top of the Chomsky Hierarchy
1.1.1 Grammar as a Rewriting System
A formal grammar is a finite rewriting system used to generate strings of a language. It is written as
\[G = \langle V_N, V_T, P, S \rangle,\]
where \(V_N\) is the set of nonterminal symbols, \(V_T\) is the set of terminal symbols, \(P\) is the set of production rules, and \(S \in V_N\) is the start symbol. Terminals are the symbols that may appear in final generated strings; nonterminals are temporary symbols that still need to be rewritten.
A derivation begins with \(S\) and repeatedly applies productions until only terminals remain. If \(S \Rightarrow^* w\) and \(w \in V_T^*\), then \(w\) belongs to the language generated by the grammar:
\[L(G) = \{w \in V_T^* \mid S \Rightarrow^* w\}.\]
For context-sensitive and unrestricted grammars, the left-hand side of a rule may contain more than one symbol. This is the main difference from context-free grammars, where the left-hand side must be a single nonterminal.
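The rewriting view translates directly into code. The sketch below (an illustrative helper, not course material) represents productions as string pairs and computes every sentential form reachable in one derivation step; the fact that the result is a *set* of alternatives is exactly the nondeterminism of grammar derivation.

```python
def successors(form, rules):
    """All sentential forms reachable from `form` by one rewriting step:
    pick a rule and replace one occurrence of its left-hand side."""
    result = set()
    for lhs, rhs in rules:
        start = 0
        while (i := form.find(lhs, start)) != -1:
            result.add(form[:i] + rhs + form[i + len(lhs):])
            start = i + 1
    return result

# Toy context-free grammar: S -> aSb | ab, generating {a^n b^n | n > 0}.
RULES = [("S", "aSb"), ("S", "ab")]
```

For example, `successors("aSb", RULES)` contains both `aaSbb` and `aabb`, one successor per applicable production.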
1.1.2 Why Type 1 and Type 0 Matter
Regular and context-free grammars are powerful enough for many programming-language and parsing tasks, but they cannot express all natural counting constraints. For example, the language
\[\{a^n b^n c^n \mid n > 0\}\]
requires three equal blocks. A pushdown automaton has one stack, so it can naturally compare two dependent quantities, such as the number of \(a\)’s and \(b\)’s, but it cannot in general enforce three simultaneous equalities. Context-sensitive grammars can express such languages because their rules can rewrite symbols depending on neighboring context and can preserve enough structure to perform several coordinated counting operations.
Type-0 grammars are even more general. They correspond to Turing Machines and therefore capture exactly the recursively enumerable languages. They are not primarily a convenient programming notation; they are important because they identify the boundary of what can be generated or accepted by mechanical computation.
1.2 Context-Sensitive Grammars
1.2.1 Formal Definition
A context-sensitive grammar (CSG) is a grammar in which every production rule has the form
\[s_1 A s_2 \to s_1 \gamma s_2,\]
where \(A \in V_N\), \(s_1, s_2, \gamma \in (V_N \cup V_T)^*\), and \(\gamma \neq \epsilon\). The symbols \(s_1\) and \(s_2\) are the context: they must remain unchanged while the nonterminal \(A\) is rewritten as \(\gamma\).
The restriction \(\gamma \neq \epsilon\) means that ordinary context-sensitive rules cannot erase symbols. Equivalently, context-sensitive grammars are usually viewed as non-contracting grammars: no rule decreases the length of the current sentential form. The only standard exception is the rule \(S \to \epsilon\), which is allowed if \(S\) never appears on the right-hand side of any production.
This non-erasing behavior is not a minor technical detail. It is what connects CSGs to machines with bounded memory: every derivation preserves or increases length, so the grammar cannot hide unbounded computation by repeatedly deleting and recreating information.
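The non-contracting condition, including the standard exception for \(S \to \epsilon\), is easy to check mechanically. A minimal sketch (the helper name is mine, not from the course):

```python
def is_non_contracting(rules, start="S"):
    """Check the non-contracting condition: |lhs| <= |rhs| for every rule,
    allowing the single exception start -> epsilon, which is legal only if
    the start symbol never occurs on any right-hand side."""
    for lhs, rhs in rules:
        if lhs == start and rhs == "":
            if any(start in r for _, r in rules):
                return False          # illegal: start symbol reachable again
            continue
        if len(rhs) < len(lhs):
            return False              # a contracting rule
    return True
```

The compact swap \(DC \to CD\) passes this check, while an erasing rule such as \(FE \to \epsilon\) from an unrestricted grammar fails it.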
1.2.2 Rule Order and Dead Ends
As with other grammar types, production rules are not applied in a predefined order. A grammar is valid if every word in the target language has at least one successful derivation and no word outside the language has a successful terminal derivation. Some choices of rule order may lead to a dead end containing nonterminals that can no longer be rewritten. That does not invalidate the grammar, because grammar derivation is nondeterministic: we only require the existence of a successful derivation for each valid word.
When constructing a CSG, it is helpful to separate rules by role:
- generation rules create the correct amount of symbolic material;
- permutation rules reorder nonterminals into the required block order;
- terminalization rules convert nonterminals into terminals once the structure is ready.
The common pitfall is allowing terminalization too early in a way that produces invalid terminal strings. If early terminalization only creates dead ends, the grammar may still be correct; if it creates a terminal word outside the intended language, the grammar is wrong.
In the solved construction tasks below, some adjacent swaps are written in the compact non-contracting form, such as \(DC \to CD\). This is the standard equivalent presentation of context-sensitive grammars. If the course requires the strict displayed form \(s_1 A s_2 \to s_1 \gamma s_2\), each such swap can be expanded by the same helper-symbol method used in the \(a^n b^n c^n\) example; for instance \(DC \to CD\) can be replaced by \(DC \to DZ\), \(DZ \to XZ\), \(XZ \to XD\), and \(XD \to CD\).
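The helper-symbol expansion described above can be generated mechanically. A sketch, assuming the helper symbols are fresh (they must not occur anywhere else in the grammar):

```python
def expand_swap(p, q, helpers=("X", "Z")):
    """Replace the compact swap pq -> qp by four strict context-sensitive
    rules, each rewriting a single symbol inside an unchanged context."""
    x, z = helpers
    return [(p + q, p + z),   # rewrite q to z in left context p
            (p + z, x + z),   # rewrite p to x in right context z
            (x + z, x + p),   # rewrite z to p in left context x
            (x + p, q + p)]   # rewrite x to q in right context p
```

Applied to the swap \(DC \to CD\), this reproduces the four-rule chain shown above, and chaining the rules gives \(DC \Rightarrow DZ \Rightarrow XZ \Rightarrow XD \Rightarrow CD\).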
1.2.3 Example Pattern: Generating \(a^n b^n c^n\)
A standard CSG for
\[\{a^n b^n c^n \mid n > 0\}\]
uses \(B\) and \(C\) as obligations to later produce \(b\)’s and \(c\)’s. The rules are:
\[S \to aBC\] \[S \to aSBC\] \[CB \to CZ\] \[CZ \to XZ\] \[XZ \to XC\] \[XC \to BC\] \[aB \to ab\] \[bB \to bb\] \[bC \to bc\] \[cC \to cc\]
The first two rules generate one \(a\) for each pair \(BC\). Repeated use of \(S \to aSBC\) creates a mixed string such as \(aaaBCBCBC\). The middle rules simulate a swap of \(C\) past \(B\), eventually collecting all \(B\) symbols before all \(C\) symbols. The final rules convert the left-to-right block \(a^n B^n C^n\) into \(a^n b^n c^n\).
For example, for \(n = 3\):
\[S \Rightarrow aSBC \Rightarrow aaSBCBC \Rightarrow aaaBCBCBC \Rightarrow^* aaaBBBCCC \Rightarrow aaabBBCCC \Rightarrow aaabbBCCC \Rightarrow aaabbbCCC \Rightarrow aaabbbcCC \Rightarrow aaabbbccC \Rightarrow aaabbbccc.\]
The temporary symbols \(Z\) and \(X\) exist because a direct swap \(CB \to BC\) is not in the strict context-sensitive form \(s_1 A s_2 \to s_1 \gamma s_2\). The multi-step replacement performs the same reordering without decreasing length.
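Because the grammar is non-contracting, all of its words up to a given length can be found by exhaustive breadth-first rewriting with a length cap: no derivation of a short word ever passes through a longer sentential form. The following sketch (illustrative, not part of the course material) enumerates the language of the grammar above up to length 9:

```python
from collections import deque

# The ten rules of the grammar above, as (lhs, rhs) string pairs.
RULES = [("S", "aBC"), ("S", "aSBC"), ("CB", "CZ"), ("CZ", "XZ"),
         ("XZ", "XC"), ("XC", "BC"), ("aB", "ab"), ("bB", "bb"),
         ("bC", "bc"), ("cC", "cc")]

def language_up_to(rules, start, max_len):
    """Collect every terminal word derivable through sentential forms of
    length at most max_len. Sound for non-contracting grammars."""
    seen, words, queue = {start}, set(), deque([start])
    while queue:
        form = queue.popleft()
        if form.islower():            # no nonterminals left: a word of L(G)
            words.add(form)
            continue
        for lhs, rhs in rules:
            pos = 0
            while (i := form.find(lhs, pos)) != -1:
                new = form[:i] + rhs + form[i + len(lhs):]
                if len(new) <= max_len and new not in seen:
                    seen.add(new)
                    queue.append(new)
                pos = i + 1
    return words
```

Up to length 9 the search finds exactly \(abc\), \(aabbcc\), and \(aaabbbccc\): premature terminalization only produces dead ends, never a wrong word.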
1.2.4 Linear Bounded Automata
Context-sensitive grammars correspond exactly to linear bounded automata (LBAs). An LBA is a nondeterministic Turing Machine whose head is restricted to the part of the tape occupied by the input, often delimited by end-markers. If the input has length \(n\), the working space is bounded by a linear function of \(n\).
The equivalence is:
\[\text{Context-Sensitive Grammars} = \text{Languages accepted by LBAs}.\]
Intuitively, an LBA has enough memory to mark, compare, and rearrange information across the whole input, but not enough to use arbitrary unbounded tape beyond the input. This places it strictly above pushdown automata and below unrestricted Turing Machines.
1.3 Unrestricted Grammars
1.3.1 Formal Definition
An unrestricted grammar is a grammar whose productions have the form
\[\alpha \to \beta,\]
where \(\alpha, \beta \in (V_N \cup V_T)^*\) and \(\alpha\) contains at least one nonterminal. Equivalently, \(\alpha \notin V_T^*\). The left-hand side cannot be empty, because allowing \(\epsilon \to \beta\) would allow symbols to appear from nothing without any control.
Unlike context-sensitive grammars, unrestricted grammars may shorten strings, delete markers, and rewrite whole configurations. This makes them powerful enough to simulate arbitrary Turing Machine computations.
1.3.2 Example Pattern: A Shorter Grammar for \(a^n b^n c^n\)
Because unrestricted grammars are not constrained by context-sensitive rule shape, the language
\[\{a^n b^n c^n \mid n > 0\}\]
can be generated more directly:
\[S \to aBC\] \[S \to aSBC\] \[CB \to BC\] \[aB \to ab\] \[bB \to bb\] \[bC \to bc\] \[cC \to cc\]
The only structural difference from the CSG version is the direct swap \(CB \to BC\). This rule is allowed in an unrestricted grammar because the left-hand side may be any nonempty string containing a nonterminal.
For \(n = 2\):
\[S \Rightarrow aSBC \Rightarrow aaBCBC \Rightarrow aaBBCC \Rightarrow aabBCC \Rightarrow aabbCC \Rightarrow aabbcC \Rightarrow aabbcc.\]
1.3.3 Unrestricted Grammars and Turing Machines
Unrestricted grammars generate exactly the languages accepted by nondeterministic Turing Machines:
\[\text{Unrestricted Grammars} = \text{NTM-recognizable Languages}.\]
Since deterministic and nondeterministic Turing Machines recognize the same class of languages, we also have:
\[\text{DTM} = \text{NTM} = \text{Unrestricted Grammars}\]
at the level of language recognition. A language in this class is called recursively enumerable or computably enumerable. If a string belongs to such a language, some Turing Machine eventually accepts it. If the string does not belong, the machine may reject or may run forever.
1.4 Nondeterministic Turing Machines
1.4.1 Transition Function
A nondeterministic Turing Machine has the same components as a deterministic one, but its transition function returns a finite set of possible moves instead of one move. In the multi-tape notation used in the course, the deterministic transition type is
\[\delta : (Q - F) \times (I \cup \{\_\}) \times (\Gamma \cup \{\_\})^k \to Q \times (\Gamma \cup \{\_\})^k \times \{R,L,S\}^{k+1}.\]
For a nondeterministic Turing Machine, the transition function becomes
\[\delta : (Q - F) \times (I \cup \{\_\}) \times (\Gamma \cup \{\_\})^k \to \mathbb{P}_F\!\left(Q \times (\Gamma \cup \{\_\})^k \times \{R,L,S\}^{k+1}\right),\]
where \(\mathbb{P}_F\) denotes the finite powerset. Thus, from one configuration, the machine may have zero, one, or many possible next configurations.
1.4.2 Acceptance and Equivalence with DTMs
The acceptance condition is existential: an NTM accepts an input if at least one computation branch reaches an accepting state. Branches that reject or run forever do not matter if another branch accepts.
Nondeterminism does not increase the class of languages recognized by Turing Machines. A deterministic Turing Machine can simulate an NTM by exploring the computation tree breadth-first. Breadth-first simulation is important because a depth-first simulation could get stuck forever on one infinite branch and never discover an accepting branch at a finite depth.
Therefore, deterministic and nondeterministic Turing Machines have the same recognition power. The difference is not what they can recognize, but how naturally and efficiently some computations can be described.
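The breadth-first simulation argument can be made concrete. The sketch below is a toy: configurations are plain tuples, `successors` plays the role of the nondeterministic transition relation, and the step bound exists only so the demonstration itself terminates (a real simulation has no bound and only semidecides acceptance):

```python
from collections import deque

def accepts(start, successors, accepting, max_steps=10_000):
    """Breadth-first search over configurations: accept as soon as any
    branch reaches an accepting configuration. Depth-first exploration
    could instead descend forever into a single non-halting branch."""
    queue, seen = deque([start]), {start}
    for _ in range(max_steps):
        if not queue:
            return False              # every branch halted, none accepted
        config = queue.popleft()
        if accepting(config):
            return True
        for nxt in successors(config):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                       # undecided within the step bound

# Toy branching system: one branch counts up forever without accepting,
# the other reaches an accepting configuration after three steps.
def successors(config):
    kind, n = config
    if kind == "loop":
        return [("loop", n + 1)]            # infinite non-accepting branch
    return [("loop", 0), ("count", n + 1)]  # branch at every "count" step

result = accepts(("count", 0), successors, lambda c: c == ("count", 3))
```

Breadth-first order finds the accepting branch at depth 3 even though infinitely many "loop" configurations exist alongside it.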
1.5 Chomsky Hierarchy Summary
The Chomsky hierarchy classifies grammars by the form of their production rules. The four classes are nested:
\[\text{Regular} \subsetneq \text{Context-Free} \subsetneq \text{Context-Sensitive} \subsetneq \text{Recursively Enumerable}.\]
| Chomsky type | Grammar | Rule form | Language class | Minimal automaton |
|---|---|---|---|---|
| Type 3 | Regular | \(A \to b\) or \(A \to bB\) | Regular | Finite State Automaton |
| Type 2 | Context-free | \(A \to \beta\) | Context-free | Nondeterministic Pushdown Automaton |
| Type 1 | Context-sensitive | \(s_1 A s_2 \to s_1 \gamma s_2\), \(\gamma \neq \epsilon\) | Context-sensitive | Linear Bounded Automaton |
| Type 0 | Unrestricted | \(\alpha \to \beta\), \(\alpha \notin V_T^*\) | Recursively enumerable | Turing Machine |
The table should be read from bottom to top in terms of restriction and from top to bottom in terms of expressive power. Type 3 is the most restrictive and easiest to decide algorithmically. Type 0 is the least restrictive and reaches the full power of Turing-computable recognition.
1.6 The Halting Problem
1.6.1 Statement
The Halting Problem asks whether there is one general algorithm that, given an arbitrary program and an arbitrary input to that program, always decides whether the program eventually stops or runs forever. For a particular program and a particular input, we can often answer this question by inspection, testing, or mathematical reasoning. The impossibility result says something stronger: there is no universal method that works for all programs and all inputs.
In Turing Machine language, assume each machine has a Gödel number, or index, \(y\), and let \(f_y(x)\) be the partial function computed by the machine with index \(y\) on input \(x\). The value \(f_y(x) = \bot\) means that the computation is undefined because the machine does not halt. The hypothetical halting decider would compute the total function
\[g(y,x) = \begin{cases} 1, & f_y(x) \neq \bot,\\ 0, & f_y(x) = \bot. \end{cases}\]
The word total is crucial: a decider must always halt with a yes-or-no answer. Running the program and waiting is not enough, because if the program does not halt, the waiting procedure also does not halt.
1.6.2 Why the Halting Problem Is Undecidable
The proof is a diagonalization argument based on self-reference. Suppose, for contradiction, that the total function \(g\) is computable. Define
\[h(x) = \begin{cases} 1, & g(x,x) = 0,\\ \bot, & g(x,x) = 1. \end{cases}\]
If \(g\) were computable, then \(h\) would also be computable, because it simply calls \(g\) and then either halts with value \(1\) or loops. Therefore \(h\) would have some machine index \(x_0\), so \(h = f_{x_0}\).
Now evaluate \(h(x_0)\). If \(g(x_0,x_0)=0\), then by definition of \(g\), \(f_{x_0}(x_0)\) does not halt; but by definition of \(h\), \(h(x_0)=1\), so it does halt. Contradiction. If \(g(x_0,x_0)=1\), then by definition of \(g\), \(f_{x_0}(x_0)\) halts; but by definition of \(h\), \(h(x_0)=\bot\), so it does not halt. Contradiction again. Hence the assumed computable decider \(g\) cannot exist.
This is why no program can solve the halting question for every possible program and input. By the Church-Turing thesis, the impossibility for Turing Machines applies to ordinary computer programs as well.
1.6.3 Static Analysis and Self-Reference
The Halting Problem is a negative result about static analysis of a behavioral property. A program analyzer tries to answer facts about behavior before or without fully executing the program. Some static checks are decidable: for example, whether an arithmetic expression is well parenthesized can be decided by a pushdown automaton. But a universal static analyzer for nontermination cannot exist.
The intuitive programming proof follows the same self-reference. Assume a procedure halts can always decide whether p(x) halts. Then define a program that asks whether a program halts on itself and deliberately does the opposite: if the analyzer says it halts, loop forever; otherwise halt. Running that new program on itself produces a contradiction. This is not a quirk of a programming language; it is the same diagonalization argument expressed operationally.
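The operational argument can be written down almost verbatim in Python. Here `halts` is a hypothetical decider passed in as a parameter; the construction shows that any concrete candidate is refuted by its own diagonal program:

```python
def make_diagonal(halts):
    """Given any claimed halting decider halts(p, x), build the program
    that does the opposite of what halts predicts about p run on itself."""
    def diagonal(p):
        if halts(p, p):
            while True:       # predicted to halt: loop forever instead
                pass
        return 0              # predicted to loop: halt immediately instead
    return diagonal

# Any concrete guess is refuted on its own diagonal program. For example,
# a "decider" that always answers "does not halt":
claims_never_halts = lambda p, x: False
d = make_diagonal(claims_never_halts)
# The guess says d(d) runs forever, yet d(d) terminates at once with 0.
```

The symmetric case is just as immediate: a guess that always answers "halts" produces a diagonal program that loops forever on itself, again contradicting the answer.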
1.6.4 Physical Computers and the Mathematical Model
Physical computers are finite physical systems. In the real world, a running program may eventually stop because of power loss, memory corruption, hardware failure, or the end of the physical system. The Halting Problem is not about these external accidents. It is about mathematical program behavior under the abstract machine model: does the computation terminate according to its own rules?
This distinction matters because theoretical computer science studies the limits of algorithms, not the reliability of hardware. A physical crash is not a mathematical solution to nontermination.
1.7 Rice’s Theorem and Software Verification
1.7.1 Semantic Program Properties
Rice’s theorem says that every nontrivial semantic property of programs is undecidable. A property is semantic if it concerns what the program computes or how it behaves, rather than surface syntax. A property is nontrivial if some programs have it and some programs do not.
Examples of semantic properties include “this program halts on every input”, “this program eventually prints zero”, and “this program computes the same function as a given specification”. Rice’s theorem says that for any such nontrivial behavioral property, no algorithm always gives a correct yes-or-no answer for all programs.
1.7.2 Consequences for Verification
Rice’s theorem explains why fully automatic software verification is impossible in the strongest possible sense. Verification tools must work around undecidability by using restrictions, approximations, annotations, type systems, bounded search, proof obligations, or conservative analyses.
This does not make verification useless. It means a verification tool must give up at least one ideal requirement: it may reject some correct programs, fail to prove some true property, require human guidance, restrict the programming language, or analyze only bounded executions. The fundamental impossibility is what forces verification to be engineering rather than a single complete automatic solution.
1.8 Decidability, Semidecidability, and Recursive Sets
1.8.1 Decision Problems
A decision problem is a problem with two possible answers: yes or no. Many computational questions can be phrased as membership questions: given an encoded object \(x\), does \(x\) belong to a set \(S \subseteq \mathbb{N}\)?
Examples include:
- Does a program terminate on a given input?
- Given a graph \(G\) and a set of vertices \(K\), is \(K\) a clique?
- Given axioms, rules, and a formula, is the formula provable?
This set-based viewpoint lets us define decidability using characteristic functions.
1.8.2 Recursive Sets
The characteristic function of a set \(S \subseteq \mathbb{N}\) is
\[c_S(x) = \begin{cases} 1, & x \in S,\\ 0, & x \notin S. \end{cases}\]
A set \(S\) is recursive, also called decidable, if and only if \(c_S\) is computable by a total algorithm. In other words, membership in \(S\) can always be decided in finite time.
Historically, computability theory was called recursion theory. The class of \(\mu\)-recursive functions was later shown to coincide with the class of functions computable by Turing Machines, which is one of the foundations behind the Church-Turing thesis.
1.8.3 Recursively Enumerable Sets
A set \(S\) is recursively enumerable (RE), also called semidecidable, if either \(S = \emptyset\) or \(S\) is the image of a total computable function \(g_S\):
\[S = \{g_S(0), g_S(1), g_S(2), \ldots\}.\]
This means that there is an effective enumeration of all elements of \(S\). If \(x \in S\), then by enumerating the values of \(g_S\), we eventually find \(x\) and can answer yes. If \(x \notin S\), the enumeration may continue forever, so we may never obtain a no answer.
The Halting Problem is the standard example: it is semidecidable because if a program halts, simulation eventually discovers that fact. It is not decidable because if the program does not halt, simulation gives no finite certificate of nontermination in general.
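The enumeration definition gives a direct semidecision procedure. A sketch using a toy set (the perfect squares, with the enumerator \(g_S(i) = i^2\) as an illustrative choice; this particular set is of course decidable, but the shape of the procedure is the point):

```python
def semidecide(x, enumerate_s):
    """Answers True if x ever appears in the enumeration of S.
    On non-members this loop runs forever: that asymmetry is exactly
    what makes a set semidecidable rather than decidable."""
    i = 0
    while True:
        if enumerate_s(i) == x:
            return True
        i += 1

# Toy example: the perfect squares, enumerated by g_S(i) = i * i.
squares = lambda i: i * i
```

Calling `semidecide(49, squares)` halts with `True`; calling it on a non-member would never return, so it must only be used when a yes answer is enough.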
1.8.4 Recursive versus Recursively Enumerable
Every recursive set is recursively enumerable, but not every recursively enumerable set is recursive. Decidable is stronger than semidecidable because a decider must answer both yes and no, while a semidecision procedure only guarantees yes answers.
The key characterization is:
\[S \text{ is recursive} \iff S \text{ is RE and } \overline{S} \text{ is RE}.\]
Here \(\overline{S} = \mathbb{N} - S\) is the complement of \(S\). Intuitively, if both \(S\) and its complement can be semidecided, then we can run both semidecision procedures in parallel. Exactly one must eventually answer yes, and that tells us whether \(x \in S\) or \(x \notin S\).
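The parallel-run argument is itself an algorithm. A sketch with \(S\) the set of perfect squares: one enumerator for \(S\), one for its complement, advanced alternately until \(x\) shows up on one side (the example set and helper names are mine):

```python
import math

def decide(x, enum_s, enum_complement):
    """Run enumerations of S and of its complement in lockstep.
    Exactly one of them must eventually produce x, so this always halts
    with a definite yes-or-no answer."""
    i = 0
    while True:
        if enum_s(i) == x:
            return True
        if enum_complement(i) == x:
            return False
        i += 1

# S = perfect squares; its complement enumerated by skipping squares.
def nth_nonsquare(i):
    n, count = 0, -1
    while True:
        if math.isqrt(n) ** 2 != n:   # n is not a perfect square
            count += 1
            if count == i:
                return n
        n += 1
```

Neither semidecision procedure alone could answer "no"; interleaving the two turns a pair of yes-only procedures into a total decider.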
The corollary is that decidable sets are closed under complement. This mirrors earlier closure discussions: for limited models, complement closure is often a sign that both yes and no cases are effectively controllable.
1.8.5 The Landscape of Language Classes
The computability landscape extends the Chomsky hierarchy. Regular languages sit inside context-free languages, which sit inside context-sensitive languages. Context-sensitive languages are decidable because LBAs have bounded configuration spaces. Decidable languages sit inside recursively enumerable languages. Outside the recursively enumerable languages are languages that cannot even be recognized by a Turing Machine.
This hierarchy explains the phrase “with great power comes great uncomputability”. Turing Machines and Turing-complete programming languages are powerful because they permit unbounded computation, including loops. That same feature makes universal termination and many behavioral properties undecidable.
1.9 Total Computable Functions and Expressive Limits
The lecture ends with a deeper limitation: there is no recursively enumerable formalism that defines all and only the total computable functions. Let \(S\) be the set of indexes of all total computable functions, with two intended properties:
- if \(i \in S\), then \(f_i\) is total;
- if \(f\) is total and computable, then some \(i \in S\) computes \(f\).
Such a set \(S\) is not recursively enumerable. The proof is by diagonalization. If all and only total computable functions could be effectively enumerated, one could construct a new total computable function that differs from the \(i\)-th enumerated function on input \(i\), contradicting completeness of the enumeration.
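The diagonal construction can be illustrated on a finite stand-in for the supposed enumeration (the real argument applies to an infinite effective listing, but the pointwise step is the same):

```python
# A supposed complete listing of total computable functions. Finite here,
# purely for illustration: the theorem concerns an infinite enumeration.
listing = [lambda n: n, lambda n: 2 * n, lambda n: n * n + 1]

def diagonal(i):
    """Total whenever every listed function is total, yet differs from the
    i-th listed function at input i, so it occurs nowhere in the listing."""
    return listing[i](i) + 1
```

Since `diagonal` disagrees with every `listing[i]` at the point `i`, the listing cannot have been complete.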
The consequence is practical. Finite-state automata define only total computations, but not all total computable functions. Turing Machines define all computable functions, but also include partial functions that may not terminate. A Turing-complete language such as C can express every algorithmic behavior, including nonterminating behavior. There cannot be a recursively enumerable subset of C programs that captures exactly all terminating programs and excludes all nonterminating ones.
2. Definitions
- Grammar: A 4-tuple \(G = \langle V_N, V_T, P, S \rangle\) consisting of nonterminals, terminals, production rules, and a start symbol.
- Terminal symbol: A symbol that may appear in final generated strings and is not rewritten further.
- Nonterminal symbol: A temporary grammar symbol that can be rewritten using production rules.
- Production rule: A rewriting rule \(\alpha \to \beta\) that replaces an occurrence of \(\alpha\) by \(\beta\) during a derivation.
- Derivation: A sequence of rewriting steps starting from \(S\) and ending, if successful, in a terminal string.
- Context-sensitive grammar: A Type-1 grammar whose rules have the form \(s_1 A s_2 \to s_1 \gamma s_2\) with \(\gamma \neq \epsilon\), except possibly \(S \to \epsilon\) under the standard restriction.
- Non-contracting grammar: A grammar in which no production decreases the length of the current sentential form.
- Context: The surrounding strings \(s_1\) and \(s_2\) in a rule \(s_1 A s_2 \to s_1 \gamma s_2\).
- Linear bounded automaton (LBA): A nondeterministic Turing Machine whose usable tape is bounded by the input region; LBAs recognize exactly the context-sensitive languages.
- Unrestricted grammar: A Type-0 grammar with productions \(\alpha \to \beta\) where \(\alpha\) contains at least one nonterminal and \(\beta\) is arbitrary.
- Nondeterministic Turing Machine (NTM): A Turing Machine whose transition function returns a finite set of possible next moves.
- Recursively enumerable language: A language accepted by a Turing Machine; members are eventually accepted, while non-members may cause nontermination.
- Chomsky hierarchy: The nested classification of grammar and language classes into regular, context-free, context-sensitive, and recursively enumerable.
- Dead end: A derivation branch that reaches a sentential form containing nonterminals but no useful rule sequence to a terminal word.
- Terminalization rule: A rule that converts a nonterminal marker into a terminal symbol.
- Halting Problem: The decision problem asking whether an arbitrary program or Turing Machine halts on an arbitrary input.
- Gödel number: A natural-number encoding of a program or Turing Machine, allowing machines and programs to be treated as data.
- Partial function: A function that may be undefined on some inputs, often because the corresponding computation does not halt.
- Total function: A function that is defined on every input; a decider computes a total yes-or-no function.
- Diagonalization: A proof technique that constructs an object differing from every object in a supposed complete enumeration, often by making a machine analyze itself.
- Static analysis: Analysis of program behavior without relying on fully executing the program on all possible runs.
- Rice’s theorem: The theorem stating that every nontrivial semantic property of programs is undecidable.
- Semantic property: A property about what a program computes or how it behaves, not merely how its text is written.
- Nontrivial property: A property that holds for at least one program and fails for at least one program.
- Decision problem: A computational problem whose answer is yes or no.
- Recursive set: A decidable set whose characteristic function is computable and total.
- Characteristic function: The function \(c_S\) that returns \(1\) for members of \(S\) and \(0\) for nonmembers.
- Semidecidable problem: A problem for which there is an algorithm that halts with yes on positive instances but may run forever on negative instances.
- Complement of a set: The set \(\overline{S} = \mathbb{N} - S\) containing exactly the natural numbers not in \(S\).
3. Formulas
- Grammar tuple: \[G = \langle V_N, V_T, P, S \rangle\]
- Generated language: \[L(G) = \{w \in V_T^* \mid S \Rightarrow^* w\}\]
- Context-sensitive rule form: \[s_1 A s_2 \to s_1 \gamma s_2,\qquad \gamma \neq \epsilon\]
- Unrestricted rule form: \[\alpha \to \beta,\qquad \alpha,\beta \in (V_N \cup V_T)^*,\quad \alpha \notin V_T^*\]
- NTM transition function: \[\delta : (Q - F) \times (I \cup \{\_\}) \times (\Gamma \cup \{\_\})^k \to \mathbb{P}_F\!\left(Q \times (\Gamma \cup \{\_\})^k \times \{R,L,S\}^{k+1}\right)\]
- Chomsky hierarchy containment: \[\text{Regular} \subsetneq \text{Context-Free} \subsetneq \text{Context-Sensitive} \subsetneq \text{Recursively Enumerable}\]
- Halting decider target function: \[g(y,x) = \begin{cases} 1, & f_y(x) \neq \bot,\\ 0, & f_y(x) = \bot \end{cases}\]
- Diagonal halting contradiction function: \[h(x) = \begin{cases} 1, & g(x,x) = 0,\\ \bot, & g(x,x) = 1 \end{cases}\]
- Characteristic function of a set: \[c_S(x) = \begin{cases} 1, & x \in S,\\ 0, & x \notin S \end{cases}\]
- Recursively enumerable set as an image: \[S = \{g_S(0), g_S(1), g_S(2), \ldots\}\]
- Recursive iff set and complement are RE: \[S \text{ is recursive} \iff S \text{ is RE and } \overline{S} \text{ is RE}\]
4. Practice
4.1. Generate \(a^n b^n c^n\) with a Context-Sensitive Grammar (Lab 12, Example 1)
Generate
\[L = \{a^n b^n c^n \mid n > 0\}\]
using a context-sensitive grammar.
Click to see the solution
Key Concept: First generate one \(a\) and one pair of obligations \(BC\) for each value of \(n\), then reorder all \(B\) symbols before all \(C\) symbols, then convert the markers to terminals.
Use the grammar:
\[S \to aBC\] \[S \to aSBC\] \[CB \to CZ\] \[CZ \to XZ\] \[XZ \to XC\] \[XC \to BC\] \[aB \to ab\] \[bB \to bb\] \[bC \to bc\] \[cC \to cc\]
- Generate the correct number of obligations. For \(n = 3\): \[S \Rightarrow aSBC \Rightarrow aaSBCBC \Rightarrow aaaBCBCBC.\] There are three \(a\)’s, three \(B\)’s, and three \(C\)’s.
- Move \(B\) symbols left of \(C\) symbols. The sequence \[CB \Rightarrow CZ \Rightarrow XZ \Rightarrow XC \Rightarrow BC\] behaves like a context-sensitive swap of \(CB\) into \(BC\).
- After enough swaps, the mixed suffix becomes: \[aaaBCBCBC \Rightarrow^* aaaBBBCCC.\]
- Convert \(B\) symbols into \(b\) symbols from left to right: \[aaaBBBCCC \Rightarrow aaabBBCCC \Rightarrow aaabbBCCC \Rightarrow aaabbbCCC.\]
- Convert \(C\) symbols into \(c\) symbols from left to right: \[aaabbbCCC \Rightarrow aaabbbcCC \Rightarrow aaabbbccC \Rightarrow aaabbbccc.\]
Answer: The grammar generates exactly \(a^n b^n c^n\) for \(n > 0\).
4.2. Construct a CSG for \(a^i b^j c^i d^j\) (Lab 12, Task 1)
Define a context-sensitive grammar for
\[L_1 = \{a^i b^j c^i d^j \mid i,j \geq 1\}.\]
Click to see the solution
Key Concept: The number of \(c\)’s must match the number of \(a\)’s, and the number of \(d\)’s must match the number of \(b\)’s. We generate \(C\) markers for the \(c\) block and \(D\) markers for the \(d\) block, then swap \(D\) past \(C\) until the order is \(a^i b^j C^i D^j\).
Use terminals \(\{a,b,c,d\}\) and nonterminals \(\{S,T,C,D\}\). The grammar is:
\[S \to aSC\] \[S \to aTC\] \[T \to bTD\] \[T \to bD\] \[DC \to CD\] \[bC \to bc\] \[cC \to cc\] \[cD \to cd\] \[dD \to dd\]
- Generate the \(a\) and \(C\) counts. Applying \(S \to aSC\) repeatedly and then \(S \to aTC\) produces: \[S \Rightarrow^* a^{i-1}SC^{i-1} \Rightarrow a^iTC^i.\]
- Generate the \(b\) and \(D\) counts. Applying \(T \to bTD\) repeatedly and then \(T \to bD\) produces: \[a^iTC^i \Rightarrow^* a^i b^j D^j C^i.\]
- Reorder the marker blocks. The rule \(DC \to CD\) moves every \(D\) to the right of every \(C\): \[a^i b^j D^j C^i \Rightarrow^* a^i b^j C^i D^j.\]
- Terminalize the \(C\) markers. The first \(C\) is after a \(b\), so \(bC \to bc\) starts the conversion. Then \(cC \to cc\) converts the rest: \[a^i b^j C^i D^j \Rightarrow^* a^i b^j c^i D^j.\]
- Terminalize the \(D\) markers. The first \(D\) is after a \(c\), so \(cD \to cd\) starts the conversion. Then \(dD \to dd\) converts the rest: \[a^i b^j c^i D^j \Rightarrow^* a^i b^j c^i d^j.\]
For example, with \(i = 2\) and \(j = 3\):
\[S \Rightarrow aSC \Rightarrow aaTCC \Rightarrow aabTDCC \Rightarrow aabbTDDCC \Rightarrow aabbbDDDCC \Rightarrow^* aabbbCCDDD \Rightarrow^* aabbbccddd.\]
Answer: The grammar above generates exactly \(\{a^i b^j c^i d^j \mid i,j \geq 1\}\).
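A quick mechanical membership check for \(L_1\) makes the two independent counting constraints explicit (an illustrative helper, not part of the lab):

```python
import re

def in_l1(w):
    """Membership test for {a^i b^j c^i d^j : i, j >= 1}: four nonempty
    blocks in order, with matching a/c counts and matching b/d counts."""
    m = re.fullmatch(r"(a+)(b+)(c+)(d+)", w)
    if not m:
        return False
    return len(m.group(1)) == len(m.group(3)) and \
           len(m.group(2)) == len(m.group(4))
```

The regular-expression part only checks block order; the two length comparisons are precisely what no context-free device could enforce simultaneously here.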
4.3. Construct a CSG for Doubled Words \(WW\) (Lab 12, Task 2)
Define a context-sensitive grammar for
\[L_2 = \{WW \mid W \in \{a,b\}^*\}.\]
Click to see the solution
Key Concept: Generate the first copy using terminals and the second copy using matching uppercase markers, interleaved with the first copy. The marker of the last pair is anchored at the right end of the string and never moves. The other markers move right, keeping their relative order, and are converted one at a time against the right end, so no marker can become a terminal before reaching its final position.
Use terminals \(\{a,b\}\) and nonterminals \(\{S_0, T, A, B, \hat{A}, \hat{B}, \bar{a}, \bar{b}\}\). Let \(A\) mean “the second-copy symbol corresponding to \(a\)” and \(B\) mean “the second-copy symbol corresponding to \(b\)”; \(\hat{A}\) and \(\hat{B}\) are the same markers for the last pair, fixed at the right end; \(\bar{a}\) and \(\bar{b}\) are converted second-copy letters that are not yet terminal. The start symbol is \(S_0\).
The grammar is:
\[S_0 \to \epsilon\] \[S_0 \to T\] \[T \to aAT\] \[T \to bBT\] \[T \to a\hat{A}\] \[T \to b\hat{B}\] \[Aa \to aA\] \[Ab \to bA\] \[Ba \to aB\] \[Bb \to bB\] \[\hat{A} \to \bar{a}\] \[\hat{B} \to \bar{b}\] \[A\bar{a} \to \bar{a}a\] \[A\bar{b} \to \bar{a}b\] \[B\bar{a} \to \bar{b}a\] \[B\bar{b} \to \bar{b}b\] \[\bar{a} \to a\] \[\bar{b} \to b\]
- Generate interleaved pairs, anchoring the last one. If \(W = abba\), then: \[S_0 \Rightarrow T \Rightarrow aAT \Rightarrow aAbBT \Rightarrow aAbBbBT \Rightarrow aAbBbBa\hat{A}.\] The lowercase symbols form the first copy in order; the markers form the second copy in the same order, with \(\hat{A}\) born at the right end.
- Move markers right across lowercase symbols. The rules \(Aa \to aA\), \(Ab \to bA\), \(Ba \to aB\), \(Bb \to bB\) move each marker rightward without changing its identity; markers cannot pass one another, the anchor, or a barred symbol, so their relative order is preserved.
- Convert against the right end, from right to left. First \(\hat{A} \to \bar{a}\) (or \(\hat{B} \to \bar{b}\)) opens the conversion. A marker is converted only when it stands immediately left of the barred symbol; the conversion rule turns the barred symbol into a terminal and passes the bar to the marker just converted: \[aAbBbBa\hat{A} \Rightarrow aAbBbBa\bar{a} \Rightarrow aAbBbaB\bar{a} \Rightarrow aAbBba\bar{b}a \Rightarrow^* aAbbaB\bar{b}a \Rightarrow aAbba\bar{b}ba \Rightarrow^* abbaA\bar{b}ba \Rightarrow abba\bar{a}bba \Rightarrow abbaabba.\]
- The bars keep converted second-copy letters distinct from first-copy terminals until the very end. If \(\bar{a} \to a\) or \(\bar{b} \to b\) is applied too early, the remaining markers lose their conversion target and the derivation reaches a dead end, so no terminal word outside the language is produced.
The rule \(S_0 \to \epsilon\) handles \(W = \epsilon\), so \(\epsilon \in L_2\). The special empty-string rule is legal because \(S_0\) never appears on the right-hand side of any rule; all other rules are non-contracting, in the compact presentation of Section 1.2.2.
Answer: The grammar generates exactly all doubled words \(WW\) over \(\{a,b\}\).
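A grammar in the style of the solution above can be checked mechanically by breadth-first rewriting with a length cap, since every rule except the empty-word rule is non-contracting. The sketch below encodes such a grammar with ASCII stand-ins (`P`, `Q` for the anchored markers \(\hat{A}\), \(\hat{B}\); `x`, `y` for the barred letters \(\bar{a}\), \(\bar{b}\)) and compares the generated words against all doubled words up to length 6:

```python
from collections import deque

# ASCII stand-ins: P = anchored A-marker, Q = anchored B-marker,
# x = barred a, y = barred b. S plays the role of S_0.
RULES = [
    ("S", ""), ("S", "T"),
    ("T", "aAT"), ("T", "bBT"), ("T", "aP"), ("T", "bQ"),
    ("Aa", "aA"), ("Ab", "bA"), ("Ba", "aB"), ("Bb", "bB"),
    ("P", "x"), ("Q", "y"),
    ("Ax", "xa"), ("Ay", "xb"), ("Bx", "ya"), ("By", "yb"),
    ("x", "a"), ("y", "b"),
]

def generated_words(max_len):
    """All terminal words derivable through forms of length <= max_len."""
    seen, words, queue = {"S"}, set(), deque(["S"])
    while queue:
        form = queue.popleft()
        if all(c in "ab" for c in form):   # only terminals left: a word
            words.add(form)
            continue
        for lhs, rhs in RULES:
            pos = 0
            while (i := form.find(lhs, pos)) != -1:
                new = form[:i] + rhs + form[i + len(lhs):]
                if len(new) <= max_len and new not in seen:
                    seen.add(new)
                    queue.append(new)
                pos = i + 1
    return words
```

Up to length 6 the search produces exactly the fifteen words \(WW\) with \(|W| \le 3\), including the empty word, and nothing else.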
4.4. Generate \(a^n b^n c^n\) with an Unrestricted Grammar (Lab 12, Example 2)
Generate
\[L = \{a^n b^n c^n \mid n > 0\}\]
using an unrestricted grammar.
Click to see the solution
Key Concept: In an unrestricted grammar, the direct swap \(CB \to BC\) is allowed, so the construction is shorter than the strict context-sensitive version.
Use:
\[S \to aBC\] \[S \to aSBC\] \[CB \to BC\] \[aB \to ab\] \[bB \to bb\] \[bC \to bc\] \[cC \to cc\]
For \(n = 2\):
\[S \Rightarrow aSBC \Rightarrow aaBCBC.\]
Now swap the middle \(CB\):
\[aaBCBC \Rightarrow aaBBCC.\]
Convert \(B\) markers:
\[aaBBCC \Rightarrow aabBCC \Rightarrow aabbCC.\]
Convert \(C\) markers:
\[aabbCC \Rightarrow aabbcC \Rightarrow aabbcc.\]
Answer: The grammar generates exactly \(a^n b^n c^n\) for \(n > 0\).
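The \(n = 2\) derivation can be replayed mechanically. In the sketch below (the `step` helper is our own illustration), each production is applied at its leftmost match, which coincides with the intended position at every step of this particular derivation.

```python
def step(s, lhs, rhs):
    """Apply production lhs -> rhs at the leftmost occurrence of lhs."""
    i = s.find(lhs)
    assert i >= 0, f"rule {lhs} -> {rhs} does not apply to {s}"
    return s[:i] + rhs + s[i + len(lhs):]

derivation = [("S", "aSBC"), ("S", "aBC"),   # generate aaBCBC
              ("CB", "BC"),                  # swap the middle CB
              ("aB", "ab"), ("bB", "bb"),    # convert B markers
              ("bC", "bc"), ("cC", "cc")]    # convert C markers
s = "S"
for lhs, rhs in derivation:
    s = step(s, lhs, rhs)
print(s)  # aabbcc
```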
4.5. Construct an Unrestricted Grammar for Powers of Two (Lab 12, Task 3)
Generate an unrestricted grammar for
\[L_1 = \{a^i \mid i = 2^k,\ k > 0\}.\]
Click to see the solution
Key Concept: Start with two \(A\) markers. Each iteration doubles the number of \(A\) markers. Because the grammar is unrestricted, we may use control markers and erase them during the final cleanup.
Use terminals \(\{a\}\) and nonterminals \(\{S,L,R,D,E,F,A\}\). The grammar is:
\[S \to LRAAE\] \[LR \to LD\] \[DA \to AAD\] \[DE \to RE\] \[AR \to RA\] \[LR \to F\] \[FA \to aF\] \[FE \to \epsilon\]
Here \(L\) and \(E\) are left and right boundary markers. \(D\) is a doubling head moving right. \(R\) is a return head moving left. \(F\) is a finalization marker.
- Initial configuration. \[S \Rightarrow LRAAE.\] If we immediately finalize by \(LR \to F\), we get: \[LRAAE \Rightarrow FAAE \Rightarrow aFAE \Rightarrow aaFE \Rightarrow aa.\] This gives \(a^2\), corresponding to \(k = 1\).
- Start one doubling pass. \[LRAAE \Rightarrow LDAAE.\]
- Double each \(A\) while moving right. \[LDAAE \Rightarrow LAADAE \Rightarrow LAAAADE.\] Each application of \(DA \to AAD\) consumes one old \(A\) and leaves two \(A\) markers behind it.
- When \(D\) reaches the right boundary, change direction. \[LAAAADE \Rightarrow LAAAARE.\]
- Move \(R\) back to the left boundary. \[LAAAARE \Rightarrow LAAARAE \Rightarrow LAARAAE \Rightarrow LARAAAE \Rightarrow LRAAAAE.\]
- Either double again or finalize. Repeating the pass doubles \(4\) to \(8\), then \(16\), and so on. Finalization converts all \(A\) markers into \(a\): \[LRAAAAE \Rightarrow FAAAAE \Rightarrow aFAAAE \Rightarrow aaFAAE \Rightarrow aaaFAE \Rightarrow aaaaFE \Rightarrow aaaa.\]
Every successful cycle doubles the number of \(A\) markers, and the only initial number is \(2\). Therefore the terminal lengths are exactly \(2,4,8,16,\ldots\).
Answer: The grammar above generates \(\{a^{2^k} \mid k > 0\}\).
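One full cycle — doubling pass, return pass, finalization — can be replayed step by step. The sketch below (the `step` helper is ours) derives \(a^4\), i.e. the case \(k = 2\); leftmost application is the intended position throughout.

```python
def step(s, lhs, rhs):
    """Apply production lhs -> rhs at the leftmost occurrence of lhs."""
    i = s.find(lhs)
    assert i >= 0, f"rule {lhs} -> {rhs} does not apply to {s}"
    return s[:i] + rhs + s[i + len(lhs):]

seq = [("S", "LRAAE"), ("LR", "LD"),                   # start a doubling pass
       ("DA", "AAD"), ("DA", "AAD"),                   # 2 markers become 4
       ("DE", "RE"),                                   # bounce off right wall
       ("AR", "RA"), ("AR", "RA"),
       ("AR", "RA"), ("AR", "RA"),                     # R returns to L
       ("LR", "F"),                                    # finalize
       ("FA", "aF"), ("FA", "aF"),
       ("FA", "aF"), ("FA", "aF"),                     # convert A markers
       ("FE", "")]                                     # erase boundary
s = "S"
for lhs, rhs in seq:
    s = step(s, lhs, rhs)
print(s)  # aaaa
```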
4.6. Construct an Unrestricted Grammar for \(a^n b^m c^n d^m\) (Lab 12, Task 4)
Generate an unrestricted grammar for
\[L_2 = \{a^n b^m c^n d^m \mid n > 0,\ m > 0\}.\]
Click to see the solution
Key Concept: Two independent counting constraints are enforced at once: the number of \(c\)’s must match the number of \(a\)’s, and the number of \(d\)’s must match the number of \(b\)’s. Two separate marker symbols, \(C\) and \(D\), carry the two counts.
Use:
\[S \to aSC\] \[S \to aTC\] \[T \to bTD\] \[T \to bD\] \[DC \to CD\] \[bC \to bc\] \[cC \to cc\] \[cD \to cd\] \[dD \to dd\]
For \(n = 2\) and \(m = 2\):
\[S \Rightarrow aSC \Rightarrow aaTCC \Rightarrow aabTDCC \Rightarrow aabbDDCC.\]
Move \(D\) markers to the right:
\[aabbDDCC \Rightarrow aabbDCDC \Rightarrow aabbCDDC \Rightarrow aabbCDCD \Rightarrow aabbCCDD.\]
Convert markers:
\[aabbCCDD \Rightarrow aabbcCDD \Rightarrow aabbccDD \Rightarrow aabbccdD \Rightarrow aabbccdd.\]
Answer: The grammar generates exactly \(a^n b^m c^n d^m\) with \(n,m > 0\).
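Beyond replaying one derivation, we can enumerate every terminal word this grammar generates up to a length bound and compare against the intended language. The bounded breadth-first search below is our own check (helper names are ours): no rule shrinks a form, so pruning at the bound is safe.

```python
from collections import deque

RULES = [("S", "aSC"), ("S", "aTC"), ("T", "bTD"), ("T", "bD"),
         ("DC", "CD"), ("bC", "bc"), ("cC", "cc"), ("cD", "cd"), ("dD", "dd")]

def words_up_to(limit):
    """All terminal words of length <= limit derivable from S."""
    seen, queue, words = {"S"}, deque(["S"]), set()
    while queue:
        s = queue.popleft()
        if s.islower():          # no uppercase left: a terminal word
            words.add(s)
            continue
        for lhs, rhs in RULES:
            i = s.find(lhs)
            while i >= 0:
                t = s[:i] + rhs + s[i + len(lhs):]
                if len(t) <= limit and t not in seen:
                    seen.add(t)
                    queue.append(t)
                i = s.find(lhs, i + 1)
    return words

expected = {"a"*n + "b"*m + "c"*n + "d"*m
            for n in range(1, 4) for m in range(1, 4) if 2*(n + m) <= 8}
print(words_up_to(8) == expected)  # True
```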
4.7. Construct a CSG for Equal Numbers of \(a\), \(b\), and \(c\) (Homework 12, Task 1)
Define a context-sensitive grammar for
\[L_3 = \{W \in \{a,b,c\}^* \mid \#(a) = \#(b) = \#(c)\ \text{and}\ \#(a) \geq 1\}.\]
Click to see the solution
Key Concept: First generate one \(A\), one \(B\), and one \(C\) for each required triple. Then use swaps to place the markers in any desired order, and finally convert markers into terminals. Since each generated unit contains exactly one of each marker, every terminal word has equal counts.
Use terminals \(\{a,b,c\}\) and nonterminals \(\{S,A,B,C\}\). The grammar is:
\[S \to ABC\] \[S \to SABC\] \[AB \to BA \qquad BA \to AB\] \[AC \to CA \qquad CA \to AC\] \[BC \to CB \qquad CB \to BC\] \[A \to a\] \[B \to b\] \[C \to c\]
- Generate equal counts. After \(n-1\) uses of \(S \to SABC\) and one use of \(S \to ABC\), the sentential form contains exactly \(n\) copies of \(A\), \(n\) copies of \(B\), and \(n\) copies of \(C\).
- Permute markers arbitrarily. The adjacent swap rules let us rearrange any sequence of \(A\), \(B\), and \(C\) markers into any other sequence with the same multiset of markers.
- Match a target word. Suppose the target word is \(cababc\). It has two \(a\)’s, two \(b\)’s, and two \(c\)’s. Generate: \[S \Rightarrow SABC \Rightarrow ABCABC.\] Then use swaps to obtain: \[ABCABC \Rightarrow^* CABABC.\] Finally convert: \[CABABC \Rightarrow^* cababc.\]
Every generated terminal word has equal numbers of \(a\), \(b\), and \(c\) because markers are only created in triples. Conversely, any word with equal positive counts can be reached by generating enough triples, permuting the markers into that word’s order, and terminalizing.
Answer: The grammar generates exactly all nonempty strings over \(\{a,b,c\}\) with equal counts of \(a\), \(b\), and \(c\).
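For the target \(cababc\), only two adjacent swaps are needed before conversion. The replay below (the `step` helper is our own illustration) applies each rule at its leftmost match, which is the intended position at every step here.

```python
def step(s, lhs, rhs):
    """Apply production lhs -> rhs at the leftmost occurrence of lhs."""
    i = s.find(lhs)
    assert i >= 0, f"rule {lhs} -> {rhs} does not apply to {s}"
    return s[:i] + rhs + s[i + len(lhs):]

seq = [("S", "SABC"), ("S", "ABC"),           # generate ABCABC
       ("BC", "CB"), ("AC", "CA"),            # permute into CABABC
       ("C", "c"), ("A", "a"), ("B", "b"),    # terminalize left to right
       ("A", "a"), ("B", "b"), ("C", "c")]
s = "S"
for lhs, rhs in seq:
    s = step(s, lhs, rhs)
print(s)  # cababc
```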
4.8. Construct an Unrestricted Grammar for \(a^n b^{2n} c^{3n}\) (Homework 12, Task 2)
Generate an unrestricted grammar for
\[L_4 = \{a^n b^{2n} c^{3n} \mid n \geq 1\}.\]
Click to see the solution
Key Concept: For every generated \(a\), create two \(B\) markers and three \(C\) markers. Then move all \(B\) markers before all \(C\) markers and terminalize.
Use terminals \(\{a,b,c\}\) and nonterminals \(\{S,B,C\}\). The grammar is:
\[S \to aBBCCC\] \[S \to aSBBCCC\] \[CB \to BC\] \[aB \to ab\] \[bB \to bb\] \[bC \to bc\] \[cC \to cc\]
- Generate the markers. For \(n = 2\): \[S \Rightarrow aSBBCCC \Rightarrow aaBBCCCBBCCC.\] This contains two \(a\)’s, four \(B\) markers, and six \(C\) markers.
- Move all \(B\) markers left of all \(C\) markers. Repeatedly apply \(CB \to BC\): \[aaBBCCCBBCCC \Rightarrow^* aaBBBBCCCCCC.\]
- Convert \(B\) markers into \(b\) symbols: \[aaBBBBCCCCCC \Rightarrow aabBBBCCCCCC \Rightarrow aabbBBCCCCCC \Rightarrow aabbbBCCCCCC \Rightarrow aabbbbCCCCCC.\]
- Convert \(C\) markers into \(c\) symbols: \[aabbbbCCCCCC \Rightarrow aabbbbcCCCCC \Rightarrow^* aabbbbcccccc.\]
The construction creates exactly two \(B\) markers and three \(C\) markers per \(a\), so every terminal word has the form \(a^n b^{2n} c^{3n}\).
Answer: The grammar generates exactly \(\{a^n b^{2n} c^{3n} \mid n \geq 1\}\).
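Because the sorting and conversion phases are just repeated applications of the same rules, the \(n = 2\) derivation is convenient to replay with loops. The sketch below (the `step` helper is ours) bubbles the \(B\) markers left and then terminalizes.

```python
def step(s, lhs, rhs):
    """Apply production lhs -> rhs at the leftmost occurrence of lhs."""
    i = s.find(lhs)
    assert i >= 0, f"rule {lhs} -> {rhs} does not apply to {s}"
    return s[:i] + rhs + s[i + len(lhs):]

s = "S"
s = step(s, "S", "aSBBCCC")
s = step(s, "S", "aBBCCC")        # aaBBCCCBBCCC
while "CB" in s:                  # sort: all B markers before all C markers
    s = step(s, "CB", "BC")
s = step(s, "aB", "ab")           # first B after the a block
while "bB" in s:                  # remaining B markers
    s = step(s, "bB", "bb")
s = step(s, "bC", "bc")           # first C after the b block
while "cC" in s:                  # remaining C markers
    s = step(s, "cC", "cc")
print(s)  # aabbbbcccccc
```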
4.9. Prove the Halting Problem Is Undecidable (Lecture 13, Example 1)
Prove that no Turing Machine can decide, for every machine index \(y\) and every input \(x\), whether machine \(y\) halts on \(x\).
Click to see the solution
Key Concept: Assume a perfect halting decider exists, then use it to build a machine that contradicts itself on its own index.
Assume the opposite of what we want to prove. Suppose there is a Turing Machine computing the total function \[g(y,x) = \begin{cases} 1, & f_y(x) \neq \bot,\\ 0, & f_y(x) = \bot. \end{cases}\] Here \(f_y(x) \neq \bot\) means that machine \(y\) halts on input \(x\).
Build the diagonal function. Define \[h(x) = \begin{cases} 1, & g(x,x) = 0,\\ \bot, & g(x,x) = 1. \end{cases}\] This function asks what machine \(x\) does on input \(x\) and then behaves oppositely: it halts when \(x\) does not halt on itself, and it loops when \(x\) halts on itself.
Use computability closure. If \(g\) is computable, then \(h\) is computable too, because \(h\) only calls \(g\) and then performs a simple branch.
Assign an index to \(h\). Since \(h\) is computable, there is some machine index \(x_0\) such that \[h = f_{x_0}.\]
Evaluate on the self-input \(x_0\). There are two cases.
If \(g(x_0,x_0)=0\), then by definition of \(g\), \(f_{x_0}(x_0)=\bot\). But by definition of \(h\), \(h(x_0)=1\). Since \(h=f_{x_0}\), this says the same computation both does not halt and does halt.
If \(g(x_0,x_0)=1\), then by definition of \(g\), \(f_{x_0}(x_0)\neq \bot\). But by definition of \(h\), \(h(x_0)=\bot\). Again, the same computation both halts and does not halt.
Conclude. Both possible outputs of \(g(x_0,x_0)\) lead to contradiction. Therefore the original assumption that \(g\) is computable is false.
Answer: The Halting Problem is undecidable: no total computable function can correctly decide halting for all program-input pairs.
4.10. Explain the Programming-Style Halting Contradiction (Lecture 13, Example 2)
Use the intuitive program notation from the lecture to explain why a universal procedure `halts` cannot exist.
Click to see the solution
Key Concept: A program analyzer that works for all programs must also analyze programs that call the analyzer, including themselves.
- Assume a perfect analyzer. Suppose `halts(p(x))` always answers whether program \(p\) halts on input \(x\):

```javascript
halts(p(x)) = if magical_analysis(p(x)) then yes else no
```

- Specialize it to self-input. Define:

```javascript
halts_on_self(p) = if halts(p(p)) then yes else no
```

This asks whether program \(p\) halts when given its own code as input.

- Build the opposite program. Define:

```javascript
trouble(p) = if halts_on_self(p) then loop forever else yes
```

If \(p\) halts on itself, `trouble` loops. If \(p\) does not halt on itself, `trouble` halts.

- Run `trouble` on itself. Consider `trouble(trouble)`. If `halts_on_self(trouble)` says yes, then `trouble(trouble)` loops forever by its own definition, so the yes answer was wrong. If `halts_on_self(trouble)` says no, then `trouble(trouble)` returns yes and halts, so the no answer was wrong.

- Conclude. The assumed perfect analyzer must fail on this self-referential input.
Answer: `magical_analysis` cannot exist, because any universal halting analyzer can be forced into contradiction by a program that does the opposite of the analyzer’s prediction on itself.
4.11. Prove That Every Recursive Set Is Recursively Enumerable (Lecture 13, Example 3)
Prove Theorem 1: if \(S\) is recursive, then \(S\) is recursively enumerable.
Click to see the solution
Key Concept: A recursive set has a computable membership test. Use that test to define a total computable function whose image is exactly the set.
- Handle the empty set. If \(S=\emptyset\), then \(S\) is RE by definition in the lecture.
- Assume \(S\) is nonempty. Since \(S \neq \emptyset\), choose some fixed \(k \in S\).
- Use the characteristic function. Since \(S\) is recursive, its characteristic function is total and computable: \[c_S(x) = \begin{cases} 1, & x \in S,\\ 0, & x \notin S. \end{cases}\]
- Define an enumerating function. \[g_S(x) = \begin{cases} x, & c_S(x)=1,\\ k, & c_S(x)=0. \end{cases}\]
- Check that \(g_S\) is total and computable. It always halts because \(c_S\) always halts. It is computable because it only calls \(c_S\) and then returns either \(x\) or the fixed value \(k\).
- Show that the image is exactly \(S\). If \(x \in S\), then \(g_S(x)=x\), so every member of \(S\) appears in the image. If \(x \notin S\), then \(g_S(x)=k\), and \(k\) is already in \(S\). Therefore no value outside \(S\) appears.
Answer: \(S\) is the image of a total computable function, so \(S\) is recursively enumerable.
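The proof is constructive, so it can be illustrated directly. In the sketch below, \(S\) is the set of even numbers (our example choice, not from the lecture), `c_S` is its characteristic function, and `g_S` is the enumerating function from the proof with fixed element \(k = 0\).

```python
def c_S(x):
    """Characteristic function of the recursive set S = even numbers."""
    return 1 if x % 2 == 0 else 0

K = 0  # a fixed element k of S

def g_S(x):
    """Total computable function from the proof: its image is exactly S."""
    return x if c_S(x) == 1 else K

# On an initial segment, the image of g_S is exactly the members of S.
image = {g_S(x) for x in range(100)}
print(image == {x for x in range(100) if x % 2 == 0})  # True
```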
4.12. Prove the Complement Characterization of Recursive Sets (Lecture 13, Example 4)
Prove Theorem 2:
\[S \text{ is recursive} \iff S \text{ is RE and } \overline{S} \text{ is RE}.\]
Click to see the solution
Key Concept: One semidecision procedure gives yes answers for \(S\). A second semidecision procedure gives yes answers for the complement. Running both in parallel gives a full decision procedure.
Forward direction: assume \(S\) is recursive.
- Since \(S\) is recursive, Theorem 1 gives that \(S\) is RE.
- Since \(S\) is recursive, \(c_S\) is computable. Define the characteristic function of the complement: \[c_{\overline{S}}(x) = 1 - c_S(x).\]
- This function is total and computable, so \(\overline{S}\) is recursive.
- By Theorem 1 again, \(\overline{S}\) is RE.
Therefore, if \(S\) is recursive, both \(S\) and \(\overline{S}\) are RE.
Backward direction: assume \(S\) and \(\overline{S}\) are RE.
- Since \(S\) is RE, there is an enumeration: \[S = \{g_S(0), g_S(1), g_S(2), \ldots\}.\]
- Since \(\overline{S}\) is RE, there is an enumeration: \[\overline{S} = \{g_{\overline{S}}(0), g_{\overline{S}}(1), g_{\overline{S}}(2), \ldots\}.\]
- To decide whether input \(x\) belongs to \(S\), run both enumerations in parallel: \[g_S(0),\ g_{\overline{S}}(0),\ g_S(1),\ g_{\overline{S}}(1),\ldots\]
- Because \(S\) and \(\overline{S}\) partition \(\mathbb{N}\), the value \(x\) must eventually appear in exactly one of the two enumerations.
- If \(x\) appears in the \(S\) enumeration, answer yes. If \(x\) appears in the complement enumeration, answer no.
This procedure always halts, so \(S\) is recursive.
Answer: A set is decidable exactly when both it and its complement are semidecidable. As a corollary, recursive sets are closed under complement.
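The parallel-enumeration decider can likewise be sketched concretely. Here \(S\) is the set of perfect squares (our example choice), with explicit total enumerators for the set and its complement; `itertools.count` drives the dovetailing, and termination is guaranteed because every natural number appears in exactly one enumeration.

```python
import itertools

def g_S(i):
    """Enumerates S = {perfect squares}: 0, 1, 4, 9, ..."""
    return i * i

def g_comp(i):
    """Enumerates the complement of S: returns the i-th non-square."""
    n, count = -1, -1
    while count < i:
        n += 1
        if int(n ** 0.5) ** 2 != n:   # n is not a perfect square
            count += 1
    return n

def member(x):
    """Decide x in S by running both enumerations in parallel."""
    for i in itertools.count():
        if g_S(i) == x:
            return True
        if g_comp(i) == x:
            return False

print(member(49), member(50))  # True False
```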
4.13. Explain Why All Total Computable Functions Cannot Be RE-Captured Exactly (Lecture 13, Task 1)
The lecture states that there is no RE formalism that defines all computable total functions and only them. Explain the diagonalization idea.
Click to see the solution
Key Concept: If all total computable functions could be effectively listed, we could construct a new total computable function that differs from every function on the list.
- Assume an exact RE formalism exists. Suppose there is an effective enumeration of all and only total computable functions: \[f_0,\ f_1,\ f_2,\ldots\]
- Build a diagonal function. Define a new function \[d(n) = f_n(n) + 1.\]
- Check totality. Every \(f_n\) in the enumeration is total by assumption, so \(f_n(n)\) is defined for every \(n\). Therefore \(d(n)\) is defined for every \(n\), so \(d\) is total.
- Check computability. Because the enumeration is effective and each listed function is computable, the diagonal construction can compute \(f_n(n)\) and then add \(1\). Thus \(d\) is computable.
- Derive the contradiction. Since \(d\) is total and computable, it should appear somewhere in the enumeration. Suppose \(d=f_k\). Evaluate both at input \(k\): \[d(k)=f_k(k)+1.\] But if \(d=f_k\), then also \[d(k)=f_k(k).\] These two equalities imply \(f_k(k)=f_k(k)+1\), impossible.
Answer: No recursively enumerable formalism can enumerate exactly all total computable functions. Any attempted complete enumeration can be diagonalized against.
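The diagonal step can be illustrated on a finite list of total functions, standing in for the assumed enumeration (the list itself is our example). The diagonal function disagrees with every listed function at its own index.

```python
# A finite stand-in for the assumed enumeration f_0, f_1, f_2, ...
fs = [lambda n: 0,          # f_0: constant zero
      lambda n: n,          # f_1: identity
      lambda n: n * n,      # f_2: squaring
      lambda n: 2 * n + 1]  # f_3: an odd linear function

def d(n):
    """Diagonal function: differs from f_n at input n by construction."""
    return fs[n](n) + 1

for k in range(len(fs)):
    assert d(k) != fs[k](k)   # d cannot coincide with any listed function
print([d(k) for k in range(len(fs))])  # [1, 2, 5, 8]
```

In the real argument the list is infinite and effective, but the same one-line disagreement rules out \(d = f_k\) for every \(k\).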
4.14. Derive \(a^3 b^3 c^3\) in the Tutorial CSG (Tutorial 14, Example 1)
Using the tutorial context-sensitive grammar, derive \(aaabbbccc\).
Click to see the solution
Key Concept: The tutorial grammar first creates \(aaaBCBCBC\), then uses context-sensitive swaps to collect all \(B\) markers before all \(C\) markers.
The rules are:
\[S \to aBC\] \[S \to aSBC\] \[CB \to CZ\] \[CZ \to WZ\] \[WZ \to WC\] \[WC \to BC\] \[aB \to ab\] \[bB \to bb\] \[bC \to bc\] \[cC \to cc\]
- Generate three \(a\) symbols and three marker pairs. \[S \Rightarrow aSBC \Rightarrow aaSBCBC \Rightarrow aaaBCBCBC.\]
- Swap each bad \(CB\) pair into \(BC\). The tutorial uses: \[CB \Rightarrow CZ \Rightarrow WZ \Rightarrow WC \Rightarrow BC.\] Therefore: \[aaaBCBCBC \Rightarrow^* aaaBBBCCC.\]
- Convert the \(B\) block. \[aaaBBBCCC \Rightarrow aaabBBCCC \Rightarrow aaabbBCCC \Rightarrow aaabbbCCC.\]
- Convert the \(C\) block. \[aaabbbCCC \Rightarrow aaabbbcCC \Rightarrow aaabbbccC \Rightarrow aaabbbccc.\]
Answer: \(S \Rightarrow^* aaabbbccc\), so the grammar generates the word for \(n = 3\).
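The four-rule simulation of the swap is easy to check in isolation (the `step` helper is our own illustration): each intermediate form is length-preserving, as a CSG requires.

```python
def step(s, lhs, rhs):
    """Apply production lhs -> rhs at the leftmost occurrence of lhs."""
    i = s.find(lhs)
    assert i >= 0, f"rule {lhs} -> {rhs} does not apply to {s}"
    return s[:i] + rhs + s[i + len(lhs):]

forms = ["CB"]
for lhs, rhs in [("CB", "CZ"), ("CZ", "WZ"), ("WZ", "WC"), ("WC", "BC")]:
    forms.append(step(forms[-1], lhs, rhs))
print(forms)  # ['CB', 'CZ', 'WZ', 'WC', 'BC']
```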
4.15. Derive \(a^2 b^2 c^2\) in the Tutorial Unrestricted Grammar (Tutorial 14, Example 2)
Using the tutorial unrestricted grammar, derive \(aabbcc\).
Click to see the solution
Key Concept: The unrestricted grammar can use the direct swap \(CB \to BC\), so the derivation is shorter than in the CSG case.
The grammar is:
\[S \to aBC\] \[S \to aSBC\] \[CB \to BC\] \[aB \to ab\] \[bB \to bb\] \[bC \to bc\] \[cC \to cc\]
Derive:
\[S \Rightarrow aSBC \Rightarrow aaBCBC.\]
Swap the middle \(CB\):
\[aaBCBC \Rightarrow aaBBCC.\]
Convert \(B\) symbols:
\[aaBBCC \Rightarrow aabBCC \Rightarrow aabbCC.\]
Convert \(C\) symbols:
\[aabbCC \Rightarrow aabbcC \Rightarrow aabbcc.\]
Answer: \(S \Rightarrow^* aabbcc\).
4.16. Derive \(a^2 b^2 c^2 d^2\) with an Unrestricted Grammar (Tutorial 14, Example 3)
Using the tutorial grammar, derive \(aabbccdd\) for
\[\{a^n b^n c^n d^n \mid n > 0\}.\]
Click to see the solution
Key Concept: Generate one \(B\), one \(C\), and one \(D\) obligation for each \(a\). Then reorder the markers into \(B^n C^n D^n\) and terminalize from left to right.
The grammar is:
\[S \to aBCD\] \[S \to aSBCD\] \[CB \to BC\] \[DB \to BD\] \[DC \to CD\] \[aB \to ab\] \[bB \to bb\] \[bC \to bc\] \[cC \to cc\] \[cD \to cd\] \[dD \to dd\]
For \(n = 2\):
\[S \Rightarrow aSBCD \Rightarrow aaBCDBCD.\]
Now reorder the second block’s \(B\), \(C\), and \(D\) markers into global block order:
\[aaBCDBCD \Rightarrow aaBCBDCD \Rightarrow aaBBCDCD \Rightarrow aaBBCCDD.\]
Convert \(B\) markers:
\[aaBBCCDD \Rightarrow aabBCCDD \Rightarrow aabbCCDD.\]
Convert \(C\) markers:
\[aabbCCDD \Rightarrow aabbcCDD \Rightarrow aabbccDD.\]
Convert \(D\) markers:
\[aabbccDD \Rightarrow aabbccdD \Rightarrow aabbccdd.\]
Answer: \(S \Rightarrow^* aabbccdd\).
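The full \(n = 2\) derivation can be replayed mechanically (the `step` helper is our own illustration); leftmost application matches the intended position at every step of this derivation.

```python
def step(s, lhs, rhs):
    """Apply production lhs -> rhs at the leftmost occurrence of lhs."""
    i = s.find(lhs)
    assert i >= 0, f"rule {lhs} -> {rhs} does not apply to {s}"
    return s[:i] + rhs + s[i + len(lhs):]

seq = [("S", "aSBCD"), ("S", "aBCD"),            # generate aaBCDBCD
       ("DB", "BD"), ("CB", "BC"), ("DC", "CD"), # sort into B^2 C^2 D^2
       ("aB", "ab"), ("bB", "bb"),               # convert B markers
       ("bC", "bc"), ("cC", "cc"),               # convert C markers
       ("cD", "cd"), ("dD", "dd")]               # convert D markers
s = "S"
for lhs, rhs in seq:
    s = step(s, lhs, rhs)
print(s)  # aabbccdd
```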